A Memory-Efficient Learning Framework for Symbol Level Precoding with Quantized NN Weights

Authors

Abstract

This paper proposes a memory-efficient deep neural network (DNN) framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its corresponding quantized version, called SLP-SQDNet. The proposed scheme offers a scalable performance-versus-memory trade-off: by quantizing a percentage of the weights, we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its quantized versions obtained through SQ yield ~3.46× and ~2.64× model compression for the binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer ~20× and ~10× computational complexity reductions compared to the optimization-based SLP and to SLP-DNet, respectively.
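To make the quantization step concrete, the sketch below illustrates one way a stochastic quantization pass over DNN weights could look: a chosen fraction of the weights is mapped to binary {-a, +a} or ternary {-a, 0, +a} levels, with the quantized subset drawn at random in favour of weights whose quantization error is small. The function name, the scaling rule, and the selection probabilities are illustrative assumptions, not the exact SLP-SQDNet procedure.

```python
import numpy as np

def stochastic_quantize(w, ratio=0.5, mode="binary", rng=None):
    """Minimal sketch of stochastic weight quantization (SQ).

    A fraction `ratio` of the weights is quantized to binary {-a, +a}
    or ternary {-a, 0, +a} levels; the rest stay full precision.
    Which weights get quantized is drawn at random, with probability
    proportional to how small their quantization error would be.
    Illustrative toy only, not the exact SLP-SQDNet procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w, dtype=float)
    a = np.mean(np.abs(w))                      # shared scaling factor

    if mode == "binary":
        q = a * np.sign(w)                      # levels {-a, +a}
        q[q == 0] = a
    else:                                       # ternary levels {-a, 0, +a}
        thr = 0.5 * a
        q = np.where(np.abs(w) > thr, a * np.sign(w), 0.0)

    err = np.abs(w - q).ravel()
    p = 1.0 / (err + 1e-12)
    p /= p.sum()                                # favour low-error weights
    n_q = int(ratio * w.size)
    idx = rng.choice(w.size, size=n_q, replace=False, p=p)

    out = w.copy().ravel()
    out[idx] = q.ravel()[idx]                   # quantize only the selected subset
    return out.reshape(w.shape)

# Example: quantize half of a toy weight matrix to binary levels.
w = np.random.randn(4, 4)
w_q = stochastic_quantize(w, ratio=0.5, mode="binary")
```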


Related articles

Symbol-Level Multiuser MISO Precoding for Multi-level Adaptive Modulation: A Multicast View

Symbol-level precoding is a new paradigm for multiuser multiple-antenna downlink systems which aims at creating constructive interference among the transmitted data streams. This can be enabled by designing the precoded signal of the multiantenna transmitter on a symbol level, taking into account both channel state information and data symbols. Previous literature has studied this paradigm for ...
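For reference, a commonly used constructive-interference SLP design for M-PSK symbols minimizes the transmit power subject to each user's noiseless received symbol lying inside its constructive region. The notation below (channel h_k, symbol s_k, SNR target γ_k, noise standard deviation σ, and half-angle θ = π/M) is assumed for illustration and is not taken from the cited abstract:

```latex
\min_{\mathbf{x}} \ \|\mathbf{x}\|_2^2
\quad \text{s.t.} \quad
\bigl|\mathrm{Im}\bigl(\mathbf{h}_k^{T}\mathbf{x}\,e^{-j\angle s_k}\bigr)\bigr|
\le
\bigl(\mathrm{Re}\bigl(\mathbf{h}_k^{T}\mathbf{x}\,e^{-j\angle s_k}\bigr) - \sqrt{\gamma_k}\,\sigma\bigr)\tan\theta,
\qquad k = 1,\dots,K.
```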


Symbol-Level Precoding Is Symbol-Perturbed ZF When Energy Efficiency Is Sought

This paper considers symbol-level precoding (SLP) for multiuser multiple-input single-output (MISO) downlink. SLP is a nonlinear precoding scheme that utilizes symbol constellation structures. It has been shown that SLP can outperform the popular linear beamforming scheme. In this work we reveal a hidden connection between SLP and linear beamforming. We show that under an energy minimization de...
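As a schematic of the structural result the title alludes to (the notation here is assumed, since the abstract is truncated), the energy-minimizing SLP transmit vector can be viewed as zero-forcing applied to perturbed symbols:

```latex
\mathbf{x}_{\mathrm{SLP}} \;=\; \mathbf{H}^{H}\bigl(\mathbf{H}\mathbf{H}^{H}\bigr)^{-1}\bigl(\mathbf{s} + \mathbf{u}\bigr),
```

where H is the multiuser channel, s the symbol vector, and u a symbol-domain perturbation confined to each symbol's constructive-interference region.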


Quantized Precoding for Multi-Antenna Downlink Channels with MAGIQ

A multi-antenna, greedy, iterative, and quantized (MAGIQ) precoding algorithm is proposed for downlink channels. MAGIQ allows a straightforward integration with orthogonal frequency-division multiplexing (OFDM). MAGIQ is compared to three existing algorithms in terms of information rates and complexity: quantized linear precoding (QLP), SQUID, and an ADMM-based algorithm. The information rate i...


Symbol-level and Multicast Precoding for Multiuser Multiantenna Downlink: A Survey, Classification and Challenges

Precoding has been conventionally considered as an effective means of mitigating the interference and efficiently exploiting the available degrees of freedom in the multiantenna downlink channel, where multiple users are simultaneously served with independent information over the same channel resources. The early works in this area were focused on transmitting an individual information stream to each user by cons...


A New Learning Algorithm for Neural Networks with Integer Weights and Quantized Non-linear Activation Functions

The hardware implementation of neural networks is a fascinating area of research with far-reaching applications. However, real-valued weights and non-linear activation functions are not well suited for hardware implementation. A new learning algorithm, which trains neural networks with integer weights and excludes derivatives from the training process, is presented in this paper. The performance of this...
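As a generic illustration of derivative-free training with integer weights (a simple hill-climbing toy, not the algorithm proposed in the cited paper), one could proceed as follows:

```python
import numpy as np

def step(x):
    """Hard-threshold (quantized) activation, a hardware-friendly choice."""
    return (x > 0).astype(float)

def loss(W, X, y):
    """Squared error of a single integer-weight layer with a step activation."""
    return np.mean((step(X @ W) - y) ** 2)

def train_integer_weights(X, y, n_out, iters=2000, w_max=8, rng=None):
    """Toy derivative-free training loop: nudge one integer weight at a time
    and keep the change only if the loss does not increase."""
    rng = np.random.default_rng() if rng is None else rng
    W = rng.integers(-w_max, w_max + 1, size=(X.shape[1], n_out)).astype(float)
    best = loss(W, X, y)
    for _ in range(iters):
        i, j = rng.integers(X.shape[1]), rng.integers(n_out)
        old = W[i, j]
        W[i, j] = np.clip(old + rng.choice([-1, 1]), -w_max, w_max)
        cand = loss(W, X, y)
        if cand <= best:
            best = cand
        else:
            W[i, j] = old          # revert the move if it did not help
    return W, best

# Toy usage: fit a random binary target with integer weights.
X = np.random.randn(64, 5)
y = (np.random.randn(64, 2) > 0).astype(float)
W, err = train_integer_weights(X, y, n_out=2)
```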



Journal

Journal title: IEEE Open Journal of the Communications Society

Year: 2023

ISSN: 2644-125X

DOI: https://doi.org/10.1109/ojcoms.2023.3285790